What I learned from doing research during the pandemic

Public inquiries into COVID-19 policy making can tell us a lot about how robust our society’s plumbing is and whether evidence-based policy making is a lived practice – let’s not waste that opportunity

The pandemic was a black swan event. Not surprisingly, policy makers and governments across the world struggled to handle it, and countries differed significantly in their approaches. Unlike other, more predictable shocks, these differences will be impossible to evaluate systematically in a clean fashion.

Yet the pandemic has highlighted that the plumbing of our societies – in particular the flows of information and data, and their analysis – is far from ideal. This undermines our collective ability to respond to crises with agility and speed.

Given the spectacle of how we collectively handled this crisis, the failure of effective climate action seems much less surprising. The pandemic also highlighted that politics and government can actively cause harm, whether through action or deliberate inaction. And lastly, it suggests that processes of societal learning are being obstructed, either deliberately or accidentally.

Digital plumbing and ICT skills – poor infrastructure hinders the information flows that would make for more efficiently organized societies

I posit that a central reason for this is simple: much of the plumbing of our societies is stuck in 20th-century technology. The ICT revolution has never fully arrived in the public sector. As a result – owing to poor public data infrastructure and poor ICT skills more broadly – evidence-based policy making remains an aspiration rather than a lived reality. And most of the West has a lot of catching up to do, in particular vis-à-vis emerging market countries.

Exactly how Western societies play catch-up is a social question. It is clear, though, that data governance around information flows is vital to ensure that the vast potential of administrative data to bring about healthier and richer societies is tapped. But it is also key to weigh the trade-offs – between privacy and the public value of data – carefully. The demand for privacy is primarily something valued by the rich, or by those whose wealth stems from luck rather than work. These are hard questions.

My research journey during the pandemic

Let me illustrate how my thinking on these questions has evolved through my research. In 2020 and 2021, I wrote three research papers. What is remarkable about each of them is that I produced robust scientific evidence on the (un)intended consequences of accidents, errors or policy mishaps within weeks – not months or years – after the fact. The soundness of my research has since been confirmed in two out of three cases.

This highlights that agile research – the production of robust, near-real-time evidence to inform decision making – is possible.

Observation 1: Socialization of harm, privatization of gains

The first research paper shone a light on the absence of a lived practice of evidence-based policy making. In the handling of the pandemic, across countries, it was proclaimed that policy making should be informed by science. But was it? A bit more than two years ago, I published a research paper that resonated with many people, both in the UK and far beyond. It studied the epidemiological – and, to a lesser extent, the economic – impact of a UK government scheme aimed at reviving economic activity on high streets. Nearly £1 billion in taxpayer money was used to subsidize eating out in restaurants in August 2020: a hefty 50% discount was on offer. It does not require public health expertise to suspect the obvious: the work, published just two months after the fact, documented quite cleanly by any research standard that this scheme was causing more COVID-19 infections. Of course, it shouldn't really have taken any research to come to that conclusion.

Social mixing was actively encouraged in a broadly unprotected population. This happened despite a vaccine being within sight. There are reasons to think that the scheme may have been informed by considerations that have little to do with economics; I will not speculate here, but I have my private views on the matter. The UK government's first reaction to the research was one of denial. Subsequent leaks of internal communication made it obvious that policy makers were informed – both before and while the scheme was running – that it would cause, and was causing, more infections, ultimately seeding the much deadlier second wave of the pandemic in the fall. It also became clear that policy makers may have carried out a real-time intervention, most likely through targeted social media advertising. This may be good practice in general, but it calls for societal debate about which tools should be leveraged to build evidence on what works. And, importantly, any intervention tested in such a way should naturally pass a "sniff test".

Now, there may have been good reasons why the scheme was run nevertheless – concerns about financial stability, for example (I should have some research on this before too long). Healthy discourse and transparent debate about the evidence that was available at the time these decisions were made are vital to ensure that lessons can be learned.

The omnipresent “do nothing” counterfactual

From my experience speaking with public sector entities, the media and (some) people who advise governments, it is obvious that many struggle to tell high-quality scientific evidence apart from bad evidence. Most also struggle to formulate what counterfactuals are, beyond the "do nothing" scenario. A counterfactual can be thought of as our best estimate of what would happen in a different state of the world – for example, had a policy not been introduced.

Policies should be chosen from a menu of options, with the options compared against one another as well as against a "do nothing" scenario. This is how politics unfolds.
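To make this concrete, here is a minimal, purely illustrative sketch of that comparison in code. The option names and outcome figures are hypothetical – nothing here is drawn from the studies discussed in this piece – but the structure captures the idea: each option is judged relative to doing nothing, and the active options are also compared with one another.

```python
# Illustrative only: hypothetical policy options with made-up outcome numbers.
# Each option is judged by its expected outcome relative to the "do nothing"
# counterfactual, and the active options are also compared with one another.

# Expected outcome (say, infections per 100,000 over the policy window)
# under each state of the world. All figures are invented for illustration.
expected_outcome = {
    "do_nothing": 500,
    "option_A_dining_subsidy": 620,
    "option_B_targeted_support": 480,
}

baseline = expected_outcome["do_nothing"]

for option, outcome in expected_outcome.items():
    if option == "do_nothing":
        continue
    print(f"{option}: {outcome - baseline:+d} infections per 100,000 vs. doing nothing")

# Comparing two active options against each other, not just against inaction:
a_vs_b = expected_outcome["option_A_dining_subsidy"] - expected_outcome["option_B_targeted_support"]
print(f"Option A vs. option B: {a_vs_b:+d} infections per 100,000")
```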

Observation 2: An over-reliance on fragile ICT systems is deeply concerning 

The second research paper shines a light on the fragile nature of the ICT plumbing of some of our Western societies. It may well go down in history as one of the most bizarre natural experiments ever studied. Due to a data processing error, nearly 16,000 COVID-19 cases were not referred to the centrally administered contact tracing system. Individuals who tested positive were not actively contacted to remind them to self-isolate, and their contacts were not traced, because the Excel spreadsheet that contained the data had hit a row limit. Their case information was simply truncated off the spreadsheet. Tim Harford took this research as inspiration for a long piece about the origins and uses of Excel.

In my research paper I use this as a natural experiment to study whether contact tracing has an effect, because which COVID-19 cases got truncated from the spreadsheet was quasi-random. And I show that it does matter: not tracing these 16,000 positive cases may have caused up to 1,500 avoidable deaths.
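The logic of such a natural experiment can be sketched in a few lines. What follows is a stylized difference-in-differences illustration on made-up area-level numbers – it is not the estimation in the paper, which is far richer – comparing areas that were more exposed to the truncated (untraced) cases with less exposed areas, before and after the error.

```python
# Stylized sketch of a difference-in-differences comparison on invented data.
# Areas more exposed to the untraced cases are compared with less exposed
# areas, before and after the data processing error.
import pandas as pd

data = pd.DataFrame({
    "group":  ["more_exposed", "more_exposed", "less_exposed", "less_exposed"],
    "period": ["before_error", "after_error", "before_error", "after_error"],
    "cases":  [100, 180, 95, 130],   # hypothetical cases per 100,000
})

means = data.pivot(index="group", columns="period", values="cases")

change_more = means.loc["more_exposed", "after_error"] - means.loc["more_exposed", "before_error"]
change_less = means.loc["less_exposed", "after_error"] - means.loc["less_exposed", "before_error"]

# The difference-in-differences estimate nets out the common trend.
did = change_more - change_less
print(f"Difference-in-differences estimate: {did} extra cases per 100,000")
```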

The data processing error was noticed in early October 2020 – I was still working on the Eat Out to Help Out paper, but worked in parallel on this new paper, which was first circulated in mid-November.

The omnipresence of “Excel”

Spreadsheet software such as Excel is among the most commonly used tooling in consulting and government. But most experts who handle large datasets would agree that it is woefully inadequate for that purpose. Nor is it suitable for analysing data or performing more sophisticated counterfactual analysis.
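As an illustration of the kind of guard rail that was missing: the legacy .xls format caps a worksheet at 65,536 rows, and even the modern .xlsx format stops at 1,048,576. The sketch below is a hypothetical example – not anything that was actually in place – of how a few lines of scripting could flag the problem before data is silently lost.

```python
# Hypothetical sketch: check a dataset against Excel's worksheet row limits
# before anyone exports it, rather than discovering truncation after the fact.
import pandas as pd

XLS_ROW_LIMIT = 65_536       # legacy .xls worksheet limit
XLSX_ROW_LIMIT = 1_048_576   # modern .xlsx worksheet limit

def check_fits_in_spreadsheet(df: pd.DataFrame) -> bool:
    """Return False and warn loudly if the data would not fit on one sheet."""
    n_rows = len(df)
    if n_rows > XLSX_ROW_LIMIT:
        print(f"WARNING: {n_rows:,} rows exceed even the .xlsx limit of {XLSX_ROW_LIMIT:,}.")
        return False
    if n_rows > XLS_ROW_LIMIT:
        print(f"WARNING: {n_rows:,} rows exceed the legacy .xls limit of {XLS_ROW_LIMIT:,}; "
              "exporting to .xls would lose data.")
        return False
    return True

# Fabricated case register standing in for the real data:
cases = pd.DataFrame({"case_id": range(70_000)})
check_fits_in_spreadsheet(cases)
```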

From my own experience interacting with governments and the public sector, most are woefully ill-equipped to process, let alone rigorously analyze, data. This is often owing to public sector employees having a poor understanding of data and analytical tools. And of course, it remains important to understand the underlying data generating process: is the data capturing what it is intended to capture, or is it itself the product of skewed incentives?

Obviously, asking these questions is important if society is to process how such an error could have happened. From a scientific standpoint, my research went through peer review and is published. There has been no investigation on the government's side, although I was invited to present my research to a research unit within the Department for Health and Social Care. Yet, in the UK and elsewhere, billions in public funds were spent on private contractors building, among other things, a contact tracing system that failed this quickly. There are significant concerns that, owing to a lack of public sector competence in this domain, the task was outsourced to suppliers who themselves did not have the right skills (or incentives). And, even more importantly, we need to understand why these skills are lacking in the public sector in the first place. I have my own hypotheses on why that is the case.

Observation 3: Only some scandals are investigated — those that do not point to political or government dysfunction

The last piece of research relates to the role of the private sector and societal learning. Of the range of "scandals" I studied, the only one that was properly investigated was the case in which the government had no conflict of interest and the blame could easily be attributed to a rogue private sector company. Neither Eat Out to Help Out nor the Excel mishap has been investigated – which is why I think a robust inquiry is so important.

Mobilizing the dynamism of the private sector to deliver quick solutions to challenges such as the pandemic can be vital. But, at the same time, it is important that there is strong and robust regulatory oversight and – in particular in matters concerning public good provision – that there are open data interfaces and common data standards that facilitate seamless data integration. Quick fixes combined with poor incentive structures can produce perverse outcomes. In that sense, the public sector can define data sharing standards such that firms supplying their services have no opportunity to lock out competition through proprietary data platforms.

The Immensa testing scandal

In summer 2021, as the vaccination rollout was culminating, a private test provider appears to have systematically produced false-negative COVID-19 test results. Typically, people would first have done a rapid COVID-19 test to detect whether an infection was present. They were then required to do a confirmatory PCR test, which also aided the detection and tracking of new COVID-19 strains – a vital public health monitoring innovation. Around 40,000 individuals were told conflicting information: a rapid test indicated they were infectious, but the lab-based PCR test produced a false negative.

I used this shock as a natural experiment because, again, there is a sharp regional signature, with the South West of England much more affected than the rest of the country. I estimated that, for every false-negative COVID-19 test, there may have been up to two avoidable COVID-19 infections and, cumulatively, up to 100 avoidable deaths.

I published this paper within two weeks of the testing error being reported in the press. Nearly a year later, a proper public health investigation was published, co-authored by 13 researchers. They used, in essence, the same methodology and confirmed the findings I had published within weeks of the fact.

Such a delay need not have occurred, as granular individual-level data was available. But, most likely, the public health experts or the supervisory agency did not have the technical skills or analytical resources to carry out proper forensic econometric assessments for quality control. There were also issues around data access and communication that compounded these challenges. The fact that it took 13 researchers and a delay of nearly a year to arrive at the same findings using the same techniques highlights some systemic flaws. Still, out of the three research exercises, this was the only one that was properly investigated.

The bigger picture

My work, and that of many in my generation of economists, is inspired by the evidence-based policy making revolution that was developed and championed by Nobel laureates such as Abhijit Banerjee, Esther Duflo, Michael Kremer, David Card, Joshua Angrist and Guido Imbens.

Evidence-based policy making is not a single model. It is a continuous and iterative process that requires at least three inputs: high-quality data, a skilled workforce, and a social consensus that learning is important, which in turn facilitates the flow of data. My research focus is on understanding to what extent evidence-based policy making is a lived practice. Naturally, this process needs to adhere to the principles of modern research practice: an ethics framework, governance, transparency, and replicability and reproducibility.

I am convinced that public investment in data infrastructure, technology and skills is at the heart of this. Liberal democracies need to sketch out an alternative path to harness technology and data for the social good. That requires (re)building trust, and it starts with honest introspection about what went wrong: what were the lessons learned, and how do we get better? Less noise and less drama would do us – and the planet – all good.